Kernel adaptive filter

Kernel adaptive filtering is an adaptive filtering technique for general nonlinear problems. It is a natural generalization of linear adaptive filtering in reproducing kernel Hilbert spaces. Kernel adaptive filters are online kernel methods, closely related to some artificial neural networks such as radial basis function networks and regularization networks. Distinguishing features include: the learning process is online, it is convex with no local minima, and it has only moderate computational complexity.

Adaptive Filtering

A linear adaptive filter is a linear filter built from basic operational units such as adders and multipliers, and is usually implemented on programmable digital processors. Mathematically, it can be modeled by a linear combiner \mathbf{w}. Given an input u, the output of the filter is y = \mathbf{w}^T u.

The vector \mathbf{w} contains the linear coefficients (weights) of the filter, and its dimensionality is the filter order. A distinguishing feature of an adaptive filter is that its coefficients can be updated online according to an optimization criterion. One common criterion is to minimize the mean square error E[(d - \mathbf{w}^T u)^2]. The adaptation of the weights is therefore a supervised learning process, which requires training data \{u, d\}. The updating rule is

\mathbf{w}(i) = \mathbf{w}(i-1) + \mathbf{g}(i) e(i)

where \mathbf{w}(i-1) is the filter weight at time i-1, and e(i) is the prediction error of \mathbf{w}(i-1) on the i-th datum \{u(i), d(i)\}:

e(i) = d(i) - \mathbf{w}(i-1)^T u(i)

The gain \mathbf{g}(i) is the algorithm gain, which takes different forms in different algorithms. The most notable adaptive filters are the least mean squares filter and the recursive least squares filter. Despite their simple structure (and probably because of it), they enjoy wide applicability and success in diverse fields such as communications, control, radar, sonar, seismology, and biomedical engineering, among others. The theory of linear adaptive filters has reached a highly mature stage of development; however, the same cannot be said of nonlinear adaptive filters.
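For illustration, the following is a minimal Python sketch of the least mean squares (LMS) update described above, in which the gain reduces to \mathbf{g}(i) = \eta u(i) for a step size \eta. The function name, variable names, and the default step size are illustrative assumptions, not part of any particular implementation.

```python
import numpy as np

def lms_filter(U, d, eta=0.1):
    """Least mean squares adaptive filter.

    U   : (n_samples, filter_order) array of input vectors u(i)
    d   : (n_samples,) array of desired responses d(i)
    eta : step size (illustrative default)
    Returns the final weight vector and the sequence of prediction errors.
    """
    n, order = U.shape
    w = np.zeros(order)            # initial weights w(0)
    errors = np.zeros(n)
    for i in range(n):
        e = d[i] - w @ U[i]        # e(i) = d(i) - w(i-1)^T u(i)
        w = w + eta * e * U[i]     # w(i) = w(i-1) + g(i) e(i), with g(i) = eta * u(i)
        errors[i] = e
    return w, errors
```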

Adaptive Filtering in Reproducing Kernel Hilbert Spaces

Kernel adaptive filters are linear adaptive filters in reproducing kernel Hilbert spaces. They belong to a more general methodology called kernel methods. The main idea of kernel methods can be summarized as follows: transform the input data into a high-dimensional feature space via a positive definite kernel, such that the inner product operation in the feature space can be computed efficiently through kernel evaluations, and then apply appropriate linear methods to the transformed data. As long as an algorithm can be formulated in terms of inner products (or, equivalently, kernel evaluations), it never needs to compute explicitly in the high-dimensional feature space. While this methodology is called the kernel trick, the underlying reproducing kernel Hilbert space plays a central role in providing linearity, convexity, and universal approximation capability at the same time. Successful examples of this methodology include support vector machines, principal component analysis, Fisher discriminant analysis and many others.
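As a concrete illustration of the kernel trick, the widely used Gaussian kernel evaluates the inner product of two inputs in an (infinite-dimensional) feature space without ever constructing that space. The sketch below is only illustrative; the function name and kernel width are assumptions.

```python
import numpy as np

def gaussian_kernel(u1, u2, sigma=1.0):
    """Gaussian (RBF) kernel: the inner product <phi(u1), phi(u2)> in feature space,
    computed directly in the input space. sigma is the kernel width."""
    diff = np.asarray(u1, dtype=float) - np.asarray(u2, dtype=float)
    return np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2))
```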

Kernel adaptive filters include kernel least mean square, kernel affine projection algorithms, kernel recursive least squares, extended kernel recursive least squares and kernel Kalman filter. Viewed as a learning problem, kernel adaptive filters aim to estimate f sequentially by minimizing E[(d - f(u))^2]. The general updating rule of a kernel adaptive filter is

f_i = f_{i-1} + \mathbf{g}(i) e(i)

where f_{i-1} is the estimate at time i-1, and e(i) is the prediction error of f_{i-1} on the i-th datum.
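As a minimal sketch of this update rule, the kernel least mean square (KLMS) algorithm stores the estimate f_i as a growing sum of kernel units centered at past inputs, f_i = f_{i-1} + \eta e(i) \kappa(u(i), \cdot). The step size, kernel width, and function names below are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(u1, u2, sigma=1.0):
    diff = np.asarray(u1, dtype=float) - np.asarray(u2, dtype=float)
    return np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2))

def klms(U, d, eta=0.5, sigma=1.0):
    """Kernel least mean squares: the estimate f is a growing RBF network.

    U     : (n_samples, input_dim) array of input vectors u(i)
    d     : (n_samples,) array of desired responses d(i)
    eta   : step size (illustrative)
    sigma : Gaussian kernel width (illustrative)
    Returns the stored centers and coefficients that define f, plus the errors.
    """
    centers, coeffs, errors = [], [], []
    for i in range(len(U)):
        # f_{i-1}(u(i)) = sum_j coeffs[j] * kappa(centers[j], u(i))
        f_prev = sum(a * gaussian_kernel(c, U[i], sigma)
                     for a, c in zip(coeffs, centers))
        e = d[i] - f_prev          # prediction error e(i)
        centers.append(U[i])       # new kernel unit centered at u(i)
        coeffs.append(eta * e)     # f_i = f_{i-1} + eta * e(i) * kappa(u(i), .)
        errors.append(e)
    return centers, coeffs, errors
```

The stored centers and coefficients make the growing memory structure of the filter explicit: each training input becomes a radial basis function unit whose weight is set by the error it produced.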

Kernel adaptive filters provide a new perspective on linear adaptive filters, which become a special case when alternatively expressed in the dual space. Kernel adaptive filters clearly show that there is a growing memory structure embedded in the filter weights. They naturally create a growing radial basis function network, learning the network topology and adapting the free parameters directly from the data at the same time. The learning rule combines error-correction and memory-based learning, and it may have a deep impact on the understanding of the essence of learning theory.
